    Keep Your Nice Friends Close, but Your Rich Friends Closer -- Computation Offloading Using NFC

    The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these increasing power demands. To overcome this problem, researchers established the Mobile Cloud Computing (MCC) research area. In this paper we advance on previous ideas by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC's short-distance communication: its improved security and its low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol, removing the need for constant user interaction, the one-way communication restraint, and the restriction to small data transfers. We present experimental results on the energy consumption and execution time of two computationally intensive, representative applications: (i) RSA key generation and encryption, and (ii) gaming/puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption, thanks to the low-power NFC interface and the benefits of offloading. Comment: 9 pages, 4 tables, 13 figures
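To make the offloading trade-off concrete, the following is a minimal sketch of the kind of decision logic such a framework might apply: offload over NFC only when the expected transfer-plus-remote execution is cheaper, in energy and time, than running locally. It is not the paper's actual protocol; all names and constants (CPU rates, power draws, NFC throughput) are illustrative assumptions.

```python
# Illustrative sketch only: a toy offloading decision, not the paper's protocol.
# All constants are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Task:
    local_cycles: float        # CPU cycles needed to run the task locally
    payload_bytes: int         # data to ship to the helper over NFC

def should_offload(task: Task,
                   local_cps: float = 1.2e9,      # local CPU cycles/second (assumed)
                   helper_cps: float = 2.4e9,     # helper CPU cycles/second (assumed)
                   nfc_bps: float = 424_000 / 8,  # NFC throughput in bytes/s (~424 kbit/s mode)
                   local_power_w: float = 1.8,    # power draw while computing locally (assumed)
                   nfc_power_w: float = 0.05):    # power draw while using NFC (assumed, low)
    """Return True if offloading is expected to save energy without slowing the task down."""
    t_local = task.local_cycles / local_cps
    e_local = t_local * local_power_w

    t_transfer = task.payload_bytes / nfc_bps
    t_remote = task.local_cycles / helper_cps      # the helper does the heavy work
    e_offload = t_transfer * nfc_power_w           # device mostly idles and uses low-power NFC

    return e_offload < e_local and (t_transfer + t_remote) <= t_local

if __name__ == "__main__":
    rsa_keygen = Task(local_cycles=6e9, payload_bytes=2_000)   # hypothetical RSA key-generation task
    print("offload?", should_offload(rsa_keygen))
```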

    Mobility models, mobile code offloading, and p2p networks of smartphones on the cloud

    It was just a few years ago when I bought my first smartphone, and now (almost) all of my friends possess at least one of these powerful devices. International Data Corporation (IDC) reports that smartphone sales showed strong growth worldwide in 2011, with 491.4 million units sold, up 61.3 percent from 2010. Furthermore, IDC predicts that 686 million smartphones will be sold in 2012, 38.4 percent of all handsets shipped. Silently, we are becoming part of a big mobile smartphone network, and it is amazing how the perception of the world is changing thanks to these small devices. If many years ago the birth of the Internet made it possible to be online, smartphones nowadays allow us to be online all the time. Today we use smartphones to do many of the tasks we used to do on desktops, and many new ones: we browse the Internet, watch videos, upload data to social networks, use online banking, find our way with GPS and online maps, and communicate in revolutionary ways. Along with these benefits, these exciting devices have brought many challenges to the research area of mobile and distributed systems.
One of the first problems that captured our attention was the study of the network that could potentially be created by interconnecting all smartphones. Typically, these devices can communicate with each other over short distances using technologies such as Bluetooth or WiFi. The network paradigm that arises from this intermittent communication, also known as a Pocket Switched Network (PSN) or Opportunistic Network ([10, 11]), is seen as a key technology for providing innovative services to users without the need for any fixed infrastructure. In PSNs, nodes are short-range communicating devices carried by humans; wireless communication links are created and dropped over time, depending on the physical distance between the device holders. On one side, social relations among humans yield recurrent movement patterns that help researchers design and build protocols that efficiently deliver messages to destinations ([12, 13, 14] among others). On the other side, the complexity of these social relations makes it difficult to build simple mobility models that efficiently generate large synthetic mobility traces that look real. Such traces would be very valuable for protocol validation, would replace the limited experimentally gathered data available so far, and would serve as a common benchmark on which researchers worldwide could validate existing and yet-to-be-designed protocols.
With this in mind, we start our study by re-designing SWIM [15], an existing mobility model shown to generate traces with properties similar to those of real ones. We make SWIM able to easily generate large (small)-scale scenarios starting from known small (large)-scale ones. To the best of our knowledge, this is the first such study. In addition, we study the social aspects of SWIM-generated traces and show how to generate a SWIM scenario in which a specific community structure of nodes is required. Finally, exploiting the scaling properties of SWIM, we present the first analysis of the scaling capabilities of several forwarding protocols, such as Epidemic [16], Delegation [13], Spray&Wait [14], and BUBBLE [12]. The first results of these works appeared in [1], and, at the time of writing, [2] has been accepted with minor revisions.
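For intuition, the following is a much-simplified, SWIM-like destination-choice step in Python. It sketches the general idea (blend proximity to a node's home with the popularity of places) rather than the exact SWIM specification; the weighting function and parameters are assumptions.

```python
# Simplified, SWIM-like mobility step (a sketch of the idea, not the exact SWIM model).
# Each node weighs candidate cells by proximity to its home and by how many other
# nodes it has seen there, then picks a destination proportionally to those weights.

import math
import random

def cell_weight(home, cell, seen_count, alpha=0.75):
    """alpha blends 'stay near home' against 'go where other people are'."""
    dist = math.dist(home, cell)
    proximity = 1.0 / (1.0 + dist) ** 2      # decays with distance from home (assumed form)
    popularity = seen_count                   # nodes previously observed in this cell
    return alpha * proximity + (1 - alpha) * popularity

def choose_destination(home, cells, seen, alpha=0.75, rng=random):
    weights = [cell_weight(home, c, seen.get(c, 0), alpha) for c in cells]
    return rng.choices(cells, weights=weights, k=1)[0]

if __name__ == "__main__":
    cells = [(x, y) for x in range(10) for y in range(10)]   # 10x10 grid of cells
    home = (2, 3)
    seen = {(7, 7): 12, (2, 4): 3}                           # hypothetical observation counts
    print("next destination:", choose_destination(home, cells, seen))
```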
Next, we take into account the fact that full cooperation and fairness among nodes cannot be assumed in PSNs. The selfish behavior of individuals has to be considered, since it is an inherent aspect of humans, the device holders (see [17], [18]). We design a market-based mathematical framework that enables heterogeneous mobile users in an opportunistic mobile network to compromise optimally and efficiently on their QoS demands. The goal of the framework is to satisfy each user with its achieved (lesser) QoS, and at the same time maximize the social welfare of the users in the network. We base our study on the observation that, in practice, users are generally tolerant of accepting lesser QoS guarantees than what they demand, with the degree of tolerance varying from user to user. This study is described in detail in Chapter 2 of this dissertation and is included in [3]. In general, QoS could refer to parameters such as response time, number of computations per unit time, or allocated bandwidth.
Along the way toward our study of the smartphone world came the big advent of mobile cloud computing: smartphones getting help from cloud-enabled services. Many researchers started believing that the cloud could help solve a crucial problem regarding smartphones: improving battery life. New-generation apps are becoming very complex (gaming, navigation, video editing, augmented reality, speech recognition, etc.), require considerable amounts of power and energy, and as a result smartphones suffer short battery lifetimes. As a consequence, mobile users have to continually upgrade their hardware to keep pace with increasing performance requirements, yet still experience battery problems. Many recent works have focused on building frameworks that enable mobile computation offloading to software clones of smartphones on the cloud (see [19, 20] among others), as well as on backup systems for the data and applications stored on our devices [21, 22, 23]. However, none of these address the dynamic and scalability features of execution on the cloud. These are very important problems, since users may request different computational power or backup space based on their workload and task deadlines. Considering this, and advancing on previous works, we design, build, and implement the ThinkAir framework, which focuses on the elasticity and scalability of the server side and enhances the power of mobile cloud computing by parallelizing method execution using multiple Virtual Machine (VM) images. We evaluate the system using a range of benchmarks, from simple micro-benchmarks to more complex applications. First, we show that with cloud offloading the execution time and energy consumption decrease by two orders of magnitude for the N-queens puzzle and by one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner, achieving greater reductions in execution time and energy consumption. Finally, we use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements. The details of the ThinkAir framework and its evaluation are described in Chapter 4 and are included in [6, 5]. Later on, we push the smartphone-cloud paradigm to a further level: we develop Clone2Clone (C2C), a distributed platform for cloud clones of smartphones.
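The following Python sketch illustrates the kind of parallelization ThinkAir enables: an embarrassingly parallel task (counting N-queens solutions) is split into independent sub-problems, each of which could be dispatched to a separate cloud VM. Here local processes stand in for VMs; the provisioning and RPC machinery of the real framework is omitted, so this is only a shape of the idea, not ThinkAir itself.

```python
# Sketch of the kind of parallel offloading ThinkAir enables: a parallelizable task
# (counting N-queens solutions) is split by first-row column into independent
# sub-tasks. Local processes simulate the multiple cloud VMs.

from concurrent.futures import ProcessPoolExecutor

def count_from_first_column(n: int, first_col: int) -> int:
    """Count N-queens solutions whose first-row queen sits in `first_col`."""
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        for c in range(n):
            if c in cols or (row - c) in diag1 or (row + c) in diag2:
                continue
            total += place(row + 1, cols | {c}, diag1 | {row - c}, diag2 | {row + c})
        return total
    return place(1, {first_col}, {-first_col}, {first_col})

def parallel_nqueens(n: int, workers: int = 4) -> int:
    # Each first-row column is an independent sub-problem ("one per VM" in spirit).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_from_first_column, [n] * n, range(n)))

if __name__ == "__main__":
    print("solutions for 10-queens:", parallel_nqueens(10))   # expected: 724
```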
Along the way toward C2C, we study the performance of device clones hosted in various virtualization environments in both private (local servers) and public (Amazon EC2) clouds. We build the first customized Amazon Machine Image (AMI) for the Android OS, a key tool for obtaining reliable performance measurements of mobile cloud systems, and show how it boosts the performance of Android images on the Amazon cloud service. We then design, build, and implement Clone2Clone, which associates a software clone on the cloud with every smartphone and interconnects the clones in a p2p fashion, exploiting the networking service within the cloud. On top of C2C we build CloneDoc, a secure real-time collaboration system for smartphone users. We measure the performance of CloneDoc on a testbed of 16 Android smartphones and clones hosted on both private and public cloud services, and show that C2C makes it possible to implement the distributed execution of advanced p2p services in a network of mobile smartphones. The design and implementation of the Clone2Clone platform is included in [7], recently submitted to an international conference.
We believe that Clone2Clone not only enables the execution of p2p applications in a network of smartphones, but can also serve as a tool to solve critical security problems. In particular, we consider the problem of computing an efficient patching strategy to stop a worm spreading between smartphones. We assume that the worm infects the devices and spreads using Bluetooth connections, emails, or any other form of communication used by the smartphones. The C2C network is used to compute the best strategy to patch the smartphones such that the number of devices to patch is low (to reduce the load on the cellular infrastructure) and the worm is stopped quickly. We consider two well-defined worms, one spreading between the devices and one attacking the cloud before moving to the real smartphones. We describe CloudShield [8], a suite of protocols running on the peer-to-peer network of clones, and show by experiments with two different datasets (Facebook and LiveJournal) that CloudShield outperforms state-of-the-art worm-containment mechanisms for mobile wireless networks. This work was done in collaboration with Marco Valerio Barbera, a PhD colleague in the same department, who contributed mainly to the implementation and testing of the malware spreading and patching strategies on the different datasets.
The communication between the real devices and the cloud, necessary for mobile computation offloading and smartphone data backup, certainly does not come for free. To the best of our knowledge, none of the works related to mobile cloud computing explicitly studies the actual overhead, in terms of bandwidth and energy, of achieving a full backup of a smartphone's data and applications, or of keeping up-to-date clones of smartphones on the cloud for mobile computation offloading purposes. In the last work of my PhD, again in collaboration with Marco Valerio Barbera, we studied the feasibility of both mobile computation offloading and mobile software/data backup in real-life scenarios. This joint work resulted in a recent publication [9] but is not included in this thesis. As in C2C, we assume an architecture where each real device is associated with a software clone on the cloud. We define two types of clones: the off-clone, whose purpose is to support computation offloading, and the back-clone, which comes into use when a restore of the user's data and apps is needed.
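As a rough illustration of the patching trade-off (patch few devices, stop the worm quickly), the sketch below applies a naive greedy coverage heuristic to a hypothetical contact graph. It is not the CloudShield protocol itself, only an indication of the kind of decision the clone network could compute.

```python
# Illustrative only: a naive greedy patching heuristic on a contact graph, sketching
# the trade-off CloudShield targets (few patched devices, fast containment).
# This is NOT the CloudShield protocol; its strategies are computed on the clone network.

def greedy_patch_set(contacts: dict[str, set[str]], budget: int) -> set[str]:
    """Pick up to `budget` devices to patch, repeatedly choosing the node whose
    neighbors are least covered so far (a classic coverage heuristic)."""
    patched: set[str] = set()
    covered: set[str] = set()
    for _ in range(budget):
        best = max((n for n in contacts if n not in patched),
                   key=lambda n: len(contacts[n] - covered),
                   default=None)
        if best is None:
            break
        patched.add(best)
        covered |= contacts[best] | {best}
    return patched

if __name__ == "__main__":
    # Hypothetical Bluetooth/email contact graph between devices.
    g = {
        "a": {"b", "c", "d"}, "b": {"a", "e"}, "c": {"a"},
        "d": {"a", "e", "f"}, "e": {"b", "d"}, "f": {"d"},
    }
    print("patch first:", greedy_patch_set(g, budget=2))
```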
We measure the bandwidth and energy consumption incurred by the real device as a result of the synchronization with the off-clone or the back-clone. The evaluation is performed through an experiment with 11 Android smartphones and an equal number of clones running on Amazon EC2. We study the data communication overhead necessary to achieve different levels of synchronization (once every 5 min, 30 min, 1 h, etc.) between devices and clones in both the off-clone and back-clone cases, and report the energy costs incurred by each of these synchronization frequencies as well as by the respective communication overhead. My contribution to this work focused mainly on the experimental setup, deployment, and data collection.
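As a back-of-the-envelope illustration of how the synchronization interval drives the overhead, the snippet below turns an interval into daily traffic and energy figures; the per-sync data delta and per-sync energy are made-up placeholders, not the measured values from the experiment.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not the measured results):
# how the choice of synchronization interval translates into daily traffic and energy.

def daily_sync_cost(interval_min: float,
                    delta_kb_per_sync: float = 150.0,   # assumed average data delta per sync
                    energy_j_per_sync: float = 6.0):    # assumed radio energy per sync event
    syncs_per_day = 24 * 60 / interval_min
    return {
        "syncs/day": round(syncs_per_day),
        "traffic_MB/day": round(syncs_per_day * delta_kb_per_sync / 1024, 1),
        "energy_J/day": round(syncs_per_day * energy_j_per_sync, 1),
    }

if __name__ == "__main__":
    for interval in (5, 30, 60):   # sync intervals in the spirit of those studied
        print(f"every {interval} min ->", daily_sync_cost(interval))
```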

    A Roadmap to Gamify Programming Education

    Learning programming relies on practicing it, which is often hampered by the barrier of difficulty. The combined use of automated assessment, which provides fast feedback to students experimenting with their code, and gamification, which provides additional motivation for students to intensify their learning effort, can help them pass the barrier of difficulty in learning programming. In such an environment, students keep receiving relevant feedback no matter how many times they try (thanks to automated assessment), and their engagement is retained (thanks to gamification). While there are a number of open software packages and programming exercise collections supporting automated assessment, to date there are no available open collections of gamified programming exercises, no open interactive programming learning environment that would support such exercises, and not even an open standard for representing such exercises so that they could be developed in different educational institutions and shared among them. This gap is addressed by the Framework for Gamified Programming Education (FGPE), an international project whose primary objective is to provide the necessary prerequisites for applying gamification to programming education, including a dedicated gamification scheme, a gamified exercise format and exercises conforming to it, software for editing the exercises, and an interactive learning environment capable of presenting them to students. This paper presents the FGPE project, its architecture and main components, as well as the results achieved so far.
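The toy loop below only illustrates the combination the paper builds on, automated assessment giving immediate feedback and gamification rewarding progress; it is not the FGPE exercise format or engine, and the point and badge rules are invented for the example.

```python
# A toy feedback-plus-reward loop, only to illustrate combining automated assessment
# with gamification; it is not the FGPE format or engine.

def assess(submission, tests):
    """Run a student's function against test cases and return fast feedback."""
    passed = sum(1 for args, expected in tests if submission(*args) == expected)
    return passed, len(tests)

def gamify(passed, total, attempts):
    points = 10 * passed                     # invented scoring rule
    badges = []
    if passed == total:
        badges.append("All tests green")
        if attempts == 1:
            badges.append("First try!")
    return {"points": points, "badges": badges, "feedback": f"{passed}/{total} tests passed"}

if __name__ == "__main__":
    tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]   # hypothetical exercise: add(a, b)
    student_add = lambda a, b: a + b
    passed, total = assess(student_add, tests)
    print(gamify(passed, total, attempts=1))
```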

    A User-Centric Diversity by Design Recommender System for the Movie Application Domain

    Performance Analysis of the IOTA Chrysalis on Heterogeneous Devices

    Existing Distributed Ledger Technology (DLT) models such as blockchain pose scalability and performance challenges for IoT systems due to resource-demanding Proof of Work (PoW), slow transaction confirmation rates, and high costs. Against the need for a viable approach, especially for low-power IoT devices, IOTA emerges as a promising technology, leveraging a Directed Acyclic Graph (DAG) based structure called the Tangle for IoT-focused applications. In this paper, we design a system enabling secure data exchange between IoT devices on IOTA Chrysalis, the latest version of the protocol. We perform extensive experiments on two machines, a workstation PC and a Raspberry Pi, to demonstrate the performance gap between powerful and low-power devices. Our findings show that even low-power devices, such as the Raspberry Pi, perform well with small payload sizes on the Chrysalis network but face challenges with larger payloads. We observe that the variation in transmission time increases as the payload size grows, indicating the impact of PoW complexity, but it remains feasible for the Raspberry Pi. We further validated our experimental setup, ensuring the validity and accuracy of our approach through discussions with the IOTA Foundation's technical team.
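A measurement loop of the kind used for such experiments might look like the sketch below. `submit_message` is a placeholder that only simulates the actual Chrysalis client call (which would perform PoW and post the message to a node), and the payload sizes and timings are illustrative, not the paper's results.

```python
# Sketch of a device-side measurement loop: time how long it takes to submit data
# payloads of growing size and look at the spread of the timings.
# `submit_message` is a placeholder for the real IOTA Chrysalis client call, which
# depends on the client library and node configuration used.

import statistics
import time

def submit_message(payload: bytes) -> None:
    """Placeholder for the real client call (PoW + posting to a node).
    Here we merely simulate payload-dependent work."""
    time.sleep(0.001 + len(payload) * 1e-7)

def measure(payload_size: int, repetitions: int = 20):
    timings = []
    for _ in range(repetitions):
        payload = b"x" * payload_size
        start = time.perf_counter()
        submit_message(payload)
        timings.append(time.perf_counter() - start)
    return {
        "payload_bytes": payload_size,
        "mean_s": statistics.mean(timings),
        "stdev_s": statistics.stdev(timings),   # spread tends to grow with payload size / PoW effort
    }

if __name__ == "__main__":
    for size in (32, 1024, 4096, 32768):        # illustrative payload sizes
        print(measure(size))
```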

    Stop Spreading the Data: PSM, Trust and Third-Party Services

    A virtualized software based on the NVIDIA cuFFT library for image denoising: performance analysis

    Generic Virtualization Service (GVirtuS) is a new solution for enabling GPGPU on virtual machines or low-powered devices. This paper focuses on the performance analysis that can be obtained using GPGPU-virtualized software. Recently, GVirtuS has been extended to support CUDA ancillary libraries with good results. Here, our aim is to analyze the applicability of this powerful tool to a real problem that uses the NVIDIA cuFFT library. As a case study, we consider a simple denoising algorithm, implementing virtualized GPU-parallel software based on the convolution theorem in order to perform the noise-removal procedure in the frequency domain. We report some preliminary tests in both physical and virtualized environments to study and analyze the potential scalability of such an algorithm.
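A minimal sketch of the underlying idea follows: denoising in the frequency domain by multiplying the image's FFT with the FFT of a Gaussian kernel, as the convolution theorem allows. numpy.fft is used here as a CPU stand-in for the cuFFT calls that GVirtuS would forward to a (possibly virtualized) GPU; the kernel shape and parameters are illustrative assumptions, not the paper's exact filter.

```python
# Frequency-domain denoising via the convolution theorem: low-pass filtering is done
# by multiplying the image's FFT with the FFT of a Gaussian kernel. numpy.fft stands
# in for the cuFFT calls that GVirtuS would forward to the (virtualized) GPU.

import numpy as np

def gaussian_kernel(shape, sigma=2.0):
    """Periodic Gaussian kernel, centered at (0, 0), normalized to sum to 1."""
    h, w = shape
    y = np.minimum(np.arange(h), h - np.arange(h))[:, None]   # circular distance along rows
    x = np.minimum(np.arange(w), w - np.arange(w))[None, :]   # circular distance along columns
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k / k.sum()

def denoise(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    kernel = gaussian_kernel(image.shape, sigma)
    # Convolution theorem: conv(image, kernel) == IFFT(FFT(image) * FFT(kernel))
    filtered = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel))
    return np.real(filtered)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((256, 256))
    clean[96:160, 96:160] = 1.0                                # toy image: a bright square
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    out = denoise(noisy, sigma=3.0)
    print("error std before/after:", round((noisy - clean).std(), 3), round((out - clean).std(), 3))
```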